\documentclass[runningheads,a4paper]{llncs}
\usepackage[margin=1.54in]{geometry}
\usepackage{amssymb,amsmath,authblk}
\setcounter{tocdepth}{3}
\usepackage{graphicx}
\usepackage{tikz}
\usepackage{color}
\usepackage{url}
%\urldef{\mailsa}\path|daniel.smith@nist.gov|
%\urldef{\mailsb}\path|ray.perlner@nist.gov|
%\urldef{\mailsc}\path|dustin.moody@nist.gov|
\newcommand{\keywords}[1]{\par\addvspace\baselineskip
\noindent\keywordname\enspace\ignorespaces#1}

%===============Definitions==========================
\newcommand{\bdf}{\begin{definition}}
\newcommand{\edf}{\end{definition}}
\newcommand{\beq}{\begin{equation}}
\newcommand{\eeq}{\end{equation}}
\newcommand{\bsp}{\begin{split}}
\newcommand{\esp}{\end{split}}
\newtheorem{Thm}{Theorem}
\newtheorem{Lem}{Lemma}
\newtheorem{Cor}{Corollary}
\newtheorem{Prop}{Proposition}[section]
\newtheorem{Def}{Definition}
\newtheorem{Rem}{Remark}
\newcommand{\Z}{\mathbb{Z}}
\def\mathbi#1{\textbf{\em #1}}
%===================End Definitions======================

%===============Perspective Definitions=====================
%\def\xa{108}
%\def\ya{-106}
%\def\za{80}
\def\xdirx{-0.996}
\def\xdiry{0.087}
\def\ydirx{0.614}
\def\ydiry{0.43}
\def\zdirx{0.0}
\def\zdiry{1}
%=============End Perspective Definitions====================

%==============Color Definitions=======================
\definecolor{lightgray}{rgb}{0.85,0.85,0.85}
\definecolor{mediumgray}{rgb}{0.75,0.75,0.75}
\definecolor{darkgray}{rgb}{0.45,0.45,0.45}
%============End Color Definitions======================

%===============Notation Macros=====================
\newcommand\restr[2]{{% we make the whole thing an ordinary symbol
  \left.\kern-\nulldelimiterspace % automatically resize the bar with \right
  #1 % the function
  \vphantom{\big|} % pretend it's a little taller at normal size
  \right|_{#2} % this is the delimiter
}}
\newcommand{\rkappa}{\raisebox{\depth}{\rotatebox{270}{$\mathcal{K}$}}}
%=============End Notation Macros====================

\begin{document}

\mainmatter
\title{Thermodynamic Analysis of Classical and Quantum Algorithms for Preimage and Collision Search Problems}
\titlerunning{Thermodynamic Analysis of Search Problems}
\maketitle

\begin{abstract}
One of the earliest proposed applications of quantum computers was the quadratic speedup of classical brute-force search represented by Grover's algorithm. Since then, various successor algorithms have been applied to a variety of oracle problems, including collision finding and claw finding. Such algorithms have typically been analyzed in terms of query complexity, which is a somewhat unphysical model of the cost of computation in the real world. Other models of these algorithms, such as the quantum circuit model and the quantum random access machine model of computation, give more realistic, but sometimes conflicting, answers regarding how much advantage the quantum algorithm provides over the corresponding classical algorithm. We instead adapt Bennett's Brownian model of computation to directly estimate the cost of both classical and quantum operations in terms of time, energy, and memory. We apply this model to compare the best known quantum algorithms for collision search and preimage search to their classical counterparts. In the collision search case, our analysis agrees with previous analysis based on the gate model of quantum computation, suggesting that quantum computation provides no improvement over the best known classical algorithm. More surprisingly, we find that a Brownian implementation of randomized classical search can achieve the same tradeoffs between time, energy, and memory as Grover's algorithm (at least up to logarithmic factors). This implementation uses thermal noise to drive a random walk within the internal state of a mostly unpowered circuit.
\keywords{Reversible Computation, Quantum Computation, Collision Search, Preimage Search, Grover's algorithm}
\end{abstract}

\section{Introduction}

\section{Bennett's Brownian Computation Model}
In contrast to ballistic models of computation, which are generally assumed to be unrealistic, Brownian computers are assumed to operate near thermal equilibrium at a finite temperature $T$. As in the case of ballistic computers, to avoid being constrained by the Landauer limit to consume at least $kT \ln 2$ of energy per bit operation, the program for a Brownian computer is encoded as a reversible circuit. In the absence of any driving force, the state of the computing system at any given time may be described as a random walk on that circuit, with state transitions that undo a useful computation happening as often as those that perform it. In order for computations to proceed forward at a nonzero expected rate, a driving force, dissipating an energy of $\epsilon$ per gate, is imposed. This causes forward transitions within the circuit to occur $e^{\frac{\epsilon}{kT}}$ times more often than backward transitions, resulting in a net forward computation rate proportional to $\frac{\epsilon}{kT}$ when $\epsilon$ is small compared to $kT$.

Some additional costs are required in order for Brownian computation to be achieved fault-tolerantly. Energy barriers must be imposed to prevent transitions to physical states outside the reversible circuit representing the prescribed computation path. In order to suppress the probability of such undesirable transitions so that a circuit of size $G$ can be completed with high probability, the size of these energy barriers must be at least on the order of $kT \ln(\frac{kT}{\epsilon}\cdot G)$. Additionally, dissipating a ``latching'' energy of about $kT \ln(\frac{kT}{\epsilon})$ during the computation's final step is required to suppress backward transitions once the computation has reached its halting state.
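As an illustration of the model, the following minimal sketch (our own, not from Bennett) simulates a driven Brownian computer as a biased random walk over gate positions. The drift per raw transition is $\tanh(\frac{\epsilon}{2kT})\approx\frac{\epsilon}{2kT}$, so completing $G$ gates takes on the order of $G\cdot\frac{kT}{\epsilon}$ raw transitions:

```python
import math
import random

def brownian_walk(G, eps_over_kT, rng, max_steps=10_000_000):
    """Biased random walk on gate positions 0..G modeling a driven
    Brownian computer; returns the number of raw transitions taken
    before the halting state G is first reached."""
    # Forward transitions are e^(eps/kT) times likelier than backward ones.
    p_forward = math.exp(eps_over_kT) / (math.exp(eps_over_kT) + 1.0)
    pos, steps = 0, 0
    while pos < G and steps < max_steps:
        pos += 1 if rng.random() < p_forward else -1
        pos = max(pos, 0)  # reflecting barrier before the first gate
        steps += 1
    return steps

# Net drift per transition is p_f - p_b = tanh(eps/2kT), so for small
# eps/kT the walk needs roughly 2*G*kT/eps transitions to finish.
```

With $\frac{\epsilon}{kT} = 0.5$, for example, the drift per transition is $\tanh(0.25)\approx 0.245$, so completing $200$ gates should take roughly $800$ raw transitions on average.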
These costs may, however, be assumed to be negligible in a number of important cases. In particular, the latching energy will be negligible when $\frac{\epsilon}{kT}$ is at least logarithmically larger than $\frac{1}{G}$. Additionally, establishing energy barriers against non-computational paths is likely to be a negligible cost when a description of the circuit can be expressed in a physically compact form, for example, using looping constructs. More precisely, if we assume that the circuit can be compressed into a program of size $m_0$, then the cost of imposing energy barriers should be on the order of $m_0\cdot kT \ln(\frac{kT}{\epsilon}\cdot G)$. This cost is negligible as long as $\frac{\epsilon}{kT}$ is significantly larger than $e^{-\frac{G}{m_0}}$. In fact, the initialization cost may be less than this, since it may be more realistic to think of the initialization process as rearranging the energy barriers already present in the raw materials available for constructing our computer. The cost is then determined by the Landauer limit and the information content of the circuit, including its appropriately large energy barriers. Since the size of these barriers does not need to be precisely specified, but merely needs to exceed $kT \ln(\frac{kT}{\epsilon}\cdot G)$, the information content of the circuit may grow sublogarithmically with $G$. All we can say with confidence is that the information content of the circuit is at least $m_0$, and therefore the initialization energy is at least on the order of $m_0\cdot kT$.

Thus far, we have only given asymptotic scalings for the relation between gate time and per-gate energy, assuming a fixed temperature.
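To make these negligibility conditions concrete, one consistent reading (a sketch on our part; the baselines are not spelled out above) compares the latching energy against the total driving energy $\epsilon G$, and the barrier-setup energy against the Landauer cost $G\,kT\ln 2$ that an irreversible implementation of the same circuit would dissipate:
\[
kT\ln\frac{kT}{\epsilon} \;\ll\; \epsilon G
\qquad\text{when}\qquad
\frac{\epsilon}{kT} \gg \frac{\ln G}{G},
\]
and
\[
m_0\, kT\ln\Bigl(\frac{kT}{\epsilon}\cdot G\Bigr) \;\ll\; G\,kT\ln 2
\qquad\text{when}\qquad
\frac{\epsilon}{kT} \gg G\, e^{-G/m_0},
\]
which match the stated conditions up to polynomial factors.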
However, if we make the heuristic assumption that the rate at which gates are traversed due to Brownian motion is no more than $\frac{4kT}{h}$, following the Margolus-Levitin theorem, then we can specify a lower bound, independent of temperature, on the per-gate energy $\epsilon$ required to perform $G$ sequential operations in time $t$: \[ \epsilon > \frac{hG}{4t}. \] For example, performing $G = 2^{64}$ sequential gates in one year requires $\epsilon \gtrsim 10^{-22}$ joules per gate, still well below the room-temperature Landauer limit of roughly $3\times 10^{-21}$ joules.

Finally, it is worth commenting on the feasibility of Brownian computation for quantum computers. Brownian computation was originally proposed as a way to improve the thermodynamic efficiency of classical computation. It should be noted that many of the techniques that have been proposed for fault tolerance in quantum computation are thermodynamically irreversible; syndrome measurement and magic state preparation are notable examples. These techniques cannot be used in a Brownian mode of computation. However, there are some proposed techniques, such as the use of Fibonacci anyons for universal quantum computation, that may be able to achieve fault tolerance without requiring significant thermodynamic irreversibility (although even in such cases, the cost of fault tolerance is believed to be polylogarithmic in the size of the circuit). We will therefore optimistically assume that quantum operations can be implemented in a Brownian fashion.

We now proceed to analyze the asymptotic complexity of classical and quantum algorithms for collision and preimage search. We will generally ignore logarithmic factors. As we are engaged in asymptotic analysis, units are, strictly speaking, irrelevant, but assuming natural units (e.g., $c = \hbar = k = 1$) may be desirable to keep constant factors small.

\section{Collision Search}
The best known classical algorithm for finding collisions in a random function is the parallel collision search algorithm of van Oorschot and Wiener.
If the range of the function is of order $N$, then, given $M$ parallel processes, each with memory $O(1)$, the algorithm can find a collision in expected serial depth $O(\frac{\sqrt{N}}{M})$. The communication cost between threads is negligible compared to the overall computational cost as long as $M$ is smaller than $\sqrt{N}$ by at least a logarithmic factor.

An improvement over classical collision search has been claimed by Brassard, H{\o}yer, and Tapp (BHT). Their algorithm is a serial process consisting of $O(N^{\frac{1}{3}})$ operations and requires a memory of size $N^{\frac{1}{3}}$. This can be generalized to arbitrary memory size, $M
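The distinguished-point structure of the van Oorschot--Wiener procedure described above can be sketched as follows. This is a simplified serial emulation (the choice of $f$, the distinguished-point criterion, and all parameters are illustrative assumptions; the real algorithm runs the trail-building loop on $M$ processors in parallel):

```python
import hashlib

def f(x, N):
    """Stand-in 'random function' with range {0, ..., N-1}; any concrete
    function would do, and SHA-256 is used purely for illustration."""
    return int.from_bytes(hashlib.sha256(x.to_bytes(8, "big")).digest(), "big") % N

def _locate(s1, l1, s2, l2, N):
    """Given two trails of lengths l1, l2 ending at the same distinguished
    point, walk them in lockstep to the colliding pair of inputs."""
    if l1 < l2:
        s1, l1, s2, l2 = s2, l2, s1, l1
    for _ in range(l1 - l2):      # align the longer trail
        s1 = f(s1, N)
    if s1 == s2:                  # one trail contains the other; no collision
        return None
    while True:
        n1, n2 = f(s1, N), f(s2, N)
        if n1 == n2:
            return s1, s2
        s1, s2 = n1, n2

def find_collision(N, theta_bits):
    """Iterate f from fresh starting points until a trail reaches a
    distinguished point (low theta_bits bits all zero); a repeated
    endpoint in the shared table means two trails merged, i.e. a collision."""
    mask = (1 << theta_bits) - 1
    max_len = 20 << theta_bits    # abandon trails stuck in a cycle
    table = {}                    # endpoint -> (start, trail length)
    x0 = 0
    while True:
        x0 += 1
        x, length = x0, 0
        while (x & mask) and length < max_len:
            x, length = f(x, N), length + 1
        if x & mask:
            continue
        if x in table and table[x][0] != x0:
            pair = _locate(table[x][0], table[x][1], x0, length, N)
            if pair is not None:
                return pair
        table.setdefault(x, (x0, length))
```

Each trail costs about $2^{\theta}$ evaluations of $f$ but only one table entry of memory, and a collision is expected after roughly $\sqrt{N}$ total evaluations, consistent with the $O(\frac{\sqrt{N}}{M})$ serial depth when the trails are distributed over $M$ processors.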